A defense of Senexism (Deathism)
EDIT: Incorporated suggestions from comments: Moved off-topic parts into comments, improved formatting, corrected links.
Definition
The LW post Value Deathism differentiates between the illusory nature of death and the ‘desirability’ of death, called deathism proper. This post is about the latter, where desirability is meant in a general sense and not (only) in the sense of desirable for an individual.
I propose a different, more neutral term for deathism: Senexism—from the Latin adjective senex, ‘old’. I propose this because death is only the end of an aging process, and by focusing on the ultimate and emotionally disturbing result one loads the topic with negative connotations. Senescence, on the other hand—though unwanted—also has positive connotations of experience and humility. This also nicely splits off (or reduces the applicability of) death by accident.
Outline
My defense is twofold. First I address the (emotional) pain and loss death causes and point out adaptive effects of the coping mechanisms humans have. Second I address the actual benefits senescence and death have—not for the individual but for the group. The latter is thus actually a utilitarian argument for death.
I will provide current research results for these points. At the end I will conclude with an opinion piece on what this means for rationalists and an outlook on how this applies in light of the singularity.
Fear of Death
How does (fear of) death affect you?
Terror Management Theory (TMT) posits that
people cope with mortality by creating beliefs and values that promise a sense of immortality.
And research supports the premise that these beliefs are
(a) defended more when people are reminded of death and
(b) protect people from mortality concerns.
Some more scientifically validated claims of TMT are (nicely presented by Psychology Today):
Death reminders cause people to self-enhance and protect self-esteem, such as by agreeing more with positive feedback and taking more credit for success, to identify more with members of their own group, and even to rate them as more unique from other animals.
One can see this even here on LW e.g. in links from Death and also in the defenses of cryonics—which look like an afterlife meme.
Applied to this post this means that you are likely to
1) defend [your] cultural worldviews more strongly.
(here, e.g., denial of death via cryonics), thus I objectively risk karma with this post.
This is the reason I started this post with a positive confirmation. I hacked you dammit. I used this fact:
9) Defending any of these things (relationships, beliefs, etc) prior to being reminded of death, or taking away people’s anxiety, reduces the effects that mortality thoughts have.
Western thinking about coping with death is confounded with beliefs about how one should cope with death—probably due to the above effect itself.
We seem to believe that (when you read this ask yourself: Do you agree with this?)
Bereaved persons are expected to exhibit significant distress following a major loss, and the failure to experience such distress tends to be seen as indicative of a problem.
Positive emotions are implicitly assumed to be absent during this period. If they are expressed, they tend to be viewed as an indication that people are denying or covering up their distress.
Following the loss of a loved one, the bereaved must confront and “work through” their feelings about the loss. Efforts to avoid or deny feelings are maladaptive in the long run.
It is important for the bereaved to break down their attachment to the deceased loved one.
Within a year or two, the bereaved will be able to come to terms with what has happened, recover from the loss, and resume their earlier level of functioning.
Do you agree?
Yes?
No! These are all Myths of coping with death!
It is true that
Unlike many stressful life experiences, bereavement cannot be altered by the coping efforts of survivors. Indeed, the major coping task faced by the bereaved is to reconcile themselves to a situation that cannot be changed and find a way to carry on with their own lives.
But this doesn’t mean that it must always hurt and take long.
Biases and Death
Thus, from our society and from being human, we are bound to believe that (we should believe that) death is horrible and we should suffer from encountering it.
For an efficiently working brain (that is set on the track of avoiding death at all cost) it is not hard to spot patterns that support the view that death is only bad.
This means that among all topics you are most likely to fall prey to one bias or other with respect to death memes e.g.
the availability heuristic (arguments against death are obviously much more available)
confirmation bias (you already believe death to be bad)
belief bias and wishful thinking (you want death to be bad)
attentional bias (thoughts of death can be salient; they are actually continuously repeated by the media)
and even illusion of control (the belief that you can cheat death)
There are probably lots of others. Take finding them as homework (or a chance for a comment).
Coping with Death Adaptively
But death and loss may not be as devastating as you make them out to be.
In particular according to Nordanger (2007)
Epidemiological studies indicate that the majority of trauma survivors recover from initial posttraumatic reactions without professional help and their posttraumatic adjustment may be facilitated by indigenous coping resources and socioeconomic structures such as traditional healers, traditions and rituals.
and Zautra (2010)
It would be most consistent with what we observe in human communities to see resilience as a natural capacity to recover and perhaps even further one’s adaptive capacities
I also understand that indigenous tribes, which are more acutely affected by harm and death, do not suffer from it the way we do.
Can it be that anti-deathism is a foul meme we acquired when technology ‘robbed’ us of ‘natural’ experience of death?
With this I close the coping section and move on to the actual benefits.
Evolution of Aging
The Wikipedia article on aging states that
The evolutionary origin of senescence remains a fundamental unsolved problem in biology.
But it gives some hints as to its origin: new results on the old disposable soma theory and new group selection theories of aging.
Following up on that, you can find that senescence is likely adaptive, even if there is not yet consensus about this.
For example, after Joshua Mitteldorf has
summarized a diverse body of data indicating that senescence is an adaptation selected for its own sake.
he goes on to propose that
the proposed benefit is that senescence protects against infectious epidemics by controlling population density and increasing diversity of the host population.
and finds evidence that
Senescence benefits the rate of evolution, increases diversity, and shortens the effective generation time.
Note that this biological argument also applies to memes.
You can have ‘infectious diseases’ of the mind which in a technological society may dominate the biological effects.
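The generation-time claim can be illustrated with a toy population model. The following is entirely my own illustrative sketch (not Mitteldorf's actual model): a fixed-size population in which everyone risks accidental death each step, and—if a maximum age is set—individuals additionally die of old age. The mean age of parents at reproduction (the "effective generation time") comes out shorter in the senescent population, because no very old individuals linger to reproduce.

```python
import random

def effective_generation_time(max_age, accident=0.02, n=200,
                              steps=2000, seed=1):
    """Mean age of parents (the 'effective generation time') in a toy
    fixed-size population. Everyone risks accidental death each step;
    if max_age > 0, individuals additionally die of old age at max_age."""
    rng = random.Random(seed)
    ages = [rng.randrange(50) for _ in range(n)]  # spread out initial ages
    parent_ages = []
    for _ in range(steps):
        ages = [a + 1 for a in ages]              # everyone grows older
        survivors = [a for a in ages
                     if rng.random() > accident
                     and not (max_age and a >= max_age)]
        for _ in range(n - len(survivors)):       # refill the vacant slots
            parent_ages.append(rng.choice(survivors))  # record parent's age
        ages = survivors + [0] * (n - len(survivors))  # newborns at age 0
    return sum(parent_ages) / len(parent_ages)

senescent = effective_generation_time(max_age=40)  # population that ages
ageless = effective_generation_time(max_age=0)     # accidents only
assert senescent < ageless  # senescence shortens the generation time
```

All parameter values here (accident rate, population size, age cap) are arbitrary; the qualitative result—capping lifespan lowers the mean parental age—is what the quoted claim about evolution speed rests on.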
Applying this principle to science might mean that without death scientific progress would be slower—something we have already been told:
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
(Max Planck in his Autobiography)
Risk Aversion and Mediocrity
This section gives my personal opinion on risk aversion in our society.
Technological progress in the last century has worked hard on satisfying basic needs. What remains are complex social needs and existential fears.
Fear of death has led to what I believe overly protecting children (and adults). For fear of injury or abuse children often no longer have the chance to
acquire basic motor skills (balance, climbing, even running)
follow their curiosity to explore (animals, chemical/physical experiments, geography)
acquire social skills (talk to strangers, meet with friends)
train immune defenses (playing in/eat dirt, pets)
A comparable list could also be given for adults. Please feel free to comment on this.
All of this protection surely leads to some (minor?) reduction of health risks. But all of it also leads to a reduction of efficiency. Some of this protection even poses other (longer-term) health risks which are less salient (yet?). This is a promotion of mediocrity. Sometimes I think our society could benefit from a bit more harm. Wouldn’t we value life more and make more out of it?
Even if you do not agree with me on this one, maybe you do on the following.
Risk Aversion and Death
Sometimes it is necessary to Make an Extraordinary Effort or even to Shut up and do the impossible! This implies setting aside some of your mental barriers. Barriers that protect you from danger, exhaustion and possibly death (not necessarily immediate but possibly speeding up senescence).
Some say that there are areas where this may be necessary:
“if nobody [] dies for space exploration we are cheating humanity. We are just not trying hard enough to get off this planet and into space. I firmly believe that moving into space is really important to the future of my species. We are going to penetrate space and become a space-faring race or we are going to stagnate and pass on.”
http://leepers.us/mtvoid/2003/VOID0207.htm (section Acceptable risk)
They used to say “if people are not dying, we’re not trying hard enough”.
http://forum.nasaspaceflight.com/index.php?topic=31452.30
This may also apply for other human endeavors.
Death and Transhumanism
Now that we have reached the edge of human progress I want to push my argument a bit beyond its applicability. The evolutionary biological benefit of senescence and death may not apply once humans can fully engineer biology. What if we “knew more, thought faster, were more the people we wished we were”? Does this stop the argument? Any group-benefit argument continues to apply as long as a population of distinct minds remains. If the minds incorporate mutual experience, then either the minds converge to multiple identical minds, or they maintain a difference, in which case the group-benefit argument may continue to hold.
Independent of whether you want to avoid becoming identical to all other minds—being a single mind makes it a single point of failure. Death—of a certain kind—may be necessary even for parts of a superintelligence.
References
Mentioned above and some more:
Wikipedia: Terror Management Theory
Discourses of Loss and Bereavement in Tigray, Ethiopia, Dag Nordanger, 2007
Resilience: A New Definition of Health for People and Communities, Zautra 2010
Wikipedia: Evolution of aging
Senescence as an adaptation to limit the spread of disease, Joshua Mitteldorf, 2009
More from Mitteldorf here
Robustness and Aging – A Systems-Level Perspective, Kriete 2013
For background you might consult the Baseline of my opinion on LW topics.
Summary
For the TL;DR crowd:
Humans have powerful mental adaptations to cope with death/loss (they often actually learn from it and get out of it stronger).
Death/senescence/loss is adaptive for the group, providing real benefits a utilitarian should see and build on.
I think that this post conflates two issues, and is an example of a flaw of reasoning that goes like this:
Alice: It would be good if we could change [thing X].
Bob: Ah, but if we changed X, then problems A, B, and C would ensue! Therefore, it would not be good if we could change X.
Bob is confusing the desirability of the change with the prudence of the change. Alice isn’t necessarily saying that we should make the change she’s proposing. She’s saying it would be good if we could do so. But Bob immediately jumps to examining what problems would ensue if we changed X, decides that changing X would be imprudent, and concludes from this that it would also be undesirable.
But that last step is entirely groundless. Something could be a bad idea in practice due to implementation difficulties, but very desirable. These are orthogonal considerations. (Another way to think about it is: the consequences of making a change, vs. the consequences of the means used to implement said change.)
I think that Bob’s mistake is rooted in the fact that he is treating Alice’s proposal as, essentially, a wish made to a genie. “Oh great genie,” says Alice, “please make it so that death is no more!” Bob, horrified, stops Alice before she can finish speaking, and shouts “No! Think of all the ways the words of your wish can be twisted! Think of the unintended consequences! You haven’t considered the implications! No, Alice, you must not make such grand wishes of a genie, for they will inevitably go awry.”
The view here on Lesswrong, on the other hand, treats Alice’s proposal as an engineering challenge. The conversation in that style goes like this:
Alice: It would be good if we could change [thing X].
Chris: Hm, I concur that this would be good if we could do it. However, consider problem A, which would arise as a result.
Alice: I think that solution J would handle that acceptably.
Chris: That seems reasonable. But, there is also problem B to deal with.
Alice: It may seem like that at first, but actually that won’t be a problem because [reason K].
Chris: Ah, I see. It occurs to me that C will also be problematic.
Alice: Hmm, you’re right. That will be a challenge; I will have to give that some serious thought.
Chris: Please do! It would be very nice if you could solve it, because then we could make change X, which we both agree would be great.
Once you properly distinguish the concepts of desirability and prudence, you can treat problems with your proposal as obstacles to overcome, not reasons not to do it. So a real “defense of deathism” would have to argue that death is desirable; that immortality is not something we would or should want, even if we solved all the auxiliary problems. Otherwise, it fails to engage with the core of the anti-death position.
Exactly. A confusion between terminal values and instrumental values.
Death is a trade-off we are sometimes willing to make, despite it being emotionally difficult. For example, we accept the risk of death in space exploration.
If instead we valued Death as an intrinsic good, we would be making the hard trade-offs in the opposite direction. For example we would stop the space program saying: “Sure, never learning about the space sucks, and as curious people we are naturally disappointed, but on the other hand, without space program we are all guaranteed to die when the Sun explodes, which is a great thing! We don’t want to risk anyone surviving (oh, the horror!), just because we were once unable to stop our curiosity.”
Or, as a more personal example, people would be willing to sell their cars and houses just to increase the hope that their loved ones won’t survive the illness and return to normal life. Doctors would be considered the greatest villains, and hanging someone would be considered a reward for their noble acts.
Not necessarily. We could value both life and death; we might then want to live some amount of years (or live long enough such that we accomplish some amount of “living”, i.e. derive the desired amount of benefit out of life), and then die. Dying before getting all the life we want out of life would leave our desire for life unsatisfied, while continuing to live longer than that would stop us from satisfying our desire for death.
A (possibly equivalent) formulation might be to say that we derive diminishing marginal utility from life, such that as some point it is outweighed by the utility we derive from death.
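This diminishing-marginal-utility formulation can be made concrete with a toy utility function. This is my own illustrative sketch, not anyone's actual preferences: logarithmic (hence diminishing) returns to years lived, minus a constant per-year term for the unmet desire for death. Total utility then peaks at a finite preferred lifespan.

```python
import math

def lifespan_utility(t, a=100.0, c=1.0):
    """Toy model: utility of living t years, with diminishing marginal
    utility of life (a * ln(1 + t)) and a constant per-year cost c
    standing in for the unsatisfied 'desire for death'."""
    return a * math.log(1 + t) - c * t

# Marginal utility of another year, a / (1 + t), falls below the
# per-year cost c at t* = a / c - 1, so utility peaks there.
t_star = 100.0 / 1.0 - 1  # = 99 years for these arbitrary parameters
assert lifespan_utility(t_star) > lifespan_utility(t_star + 10)
assert lifespan_utility(t_star) > lifespan_utility(t_star - 10)
```

The parameters a and c are arbitrary; the point is only that combining a diminishing value of life with any constant countervailing term yields a finite optimal lifespan, which is the "(possibly equivalent) formulation" described above.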
I’m not sure why you think that the post confuses these points.
What I see from these first comments is that a definition of deathism that could be defended is needed. Obvious in retrospect.
One proposal (with hindsight) would be
What does it mean for something to be “beneficial to a society”, apart from any benefits it has to the individuals in that society?
My death would be very bad for me, personally. How does the “benefit to society” of my death cash out as benefit to me?
What does it mean to benefit a person, apart from benefits to the individual cells in that person’s body? I don’t think it’s unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.
I actually do think it is unreasonable to take any but the physical stance toward society; the predictive power of taking the intentional stance (or the design stance, for that matter) is just less.
But! We might assume, for the sake of argument, that we can think of society as having emergent goals, goals that do not benefit its members (or do not benefit a majority of its members, or something). In that case, however, my question is:
Why should I care?
Society’s emergent goals can go take a flying leap, as can evolution’s goals, the goals of my genes, the goals of the human species, and any other goals of any other entity that is not me or the people I care about.
Hmm, I’ll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people’s preferences, so that any drop in predictive power might be worth it. But I’m not sure I’ve seen evidence in either direction; I just assumed it based on analogy and priors.
As for why you should care, I don’t think you should, necessarily, if you don’t already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.
Sorry, when I said “predictive power”, I was actually assuming normalization for efficiency. That is, my claim is that the total predictive capacity you get for your available computational resources is greatest by taking the physical stance in this case.
Oh, and:
It is almost trivial to imagine such a thing. For example, my body may be destroyed utterly in the process of transferring my mind, unharmed, into a new, artificial body, better in every way than my old one. This would be great for me (assuming the new body suited my wants and needs), but bad for the cells making up my existing body.
The core idea here is that I am not my body. I am currently instantiated in my body, but that’s not the same thing. I care about my current instantiation only to the degree that doing so is necessary for me to survive and prosper.
Ah. I’m not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)
I mean, I don’t necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I just am not sure how it could be tested scientifically.
No direct evidence, just the totality of what we currently know about the mind (i.e. cognitive science). Subjective experience is not irrelevant, though I am still confused about its nature. I don’t, however, have any reason to believe that it’s tied to any particular instantiation.
I don’t think my view can properly be characterized as dualism. I don’t posit any sort of nonmaterial properties of mind, for instance, nor that the mind itself is some nonmaterial substance. Computationalism merely says, essentially, that “the mind is what the brain does”, and that other physical substrates can perform the same computation.
Everything that I know about the idea of embodied cognition leads me to conclude that it is a brand of mysticism. I’ve never heard a cogent argument for why embodiment can’t be simulated on some suitable level.
Hmm, I can see arguments for and against calling computationalism a form of dualism. I don’t think it matters much, so I’ll accept your claim that it’s not.
As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro’s book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important, that it’s a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.
I agree that embodiment could be simulated. But I don’t like to make assumptions about how subjective experience works, and for all I know, it arises from some substrates of cognition but not others. Since I think of my subjective experience as an essential part of my self, this seems important.
I agree. I don’t think embodiment is irrelevant; my own field (human-computer interaction) takes embodiment quite seriously — it’s an absolutely integral factor in natural user interface design, for example.
I just don’t think embodiment is in any way magic, the way that the embodied cognition people seem to think and imply. If you can simulate a human and their environment on any level you like, then embodiment stops being an issue. It seems like we don’t actually disagree on this.
This is certainly not impossible, but it’s not clear to me why you couldn’t then simulate the substrate on a sufficiently low level as to capture whatever aspect of the substrate is responsible for enabling cognition. After all, we could in principle simulate the entire universe down to quantum configuration distributions, right?
If you wanted to make a weaker claim based on computational tractability, then that would of course be another thing.
I concur with this. To the extent that I have any kind of a handle of what subjective experience even is, it does seem quite important.
P.S.
Yeah, this is probably a question of preferred terminology and I am not inclined to argue about it too much; I just wanted to clarify my actual views.
Sure, you can (and have to) take the body with you into the simulation—but then the (biological) rules governing the body still apply. You may have more control over them, though.
It doesn’t.
Except in the same way as altruism does.
Ok. So, my death benefits me insofar as I care about other people (altruism) and my death benefits those people.
Obvious next question: how does my death benefit the people I care about?
Roughly delineated, I care about my family and friends immensely; casual acquaintances / colleagues / peers / etc. a good amount; and other people roughly to the degree that they share my culture (variously defined).
Thus, for death to benefit me via altruism, it would have to benefit my family and friends a lot (to offset the great loss they would feel at my death); and/or benefit casual acquaintances / colleagues / peers a pretty large amount (to compensate for the discount factor of how much I care about them); and/or benefit various other people in the world an almost astronomical total amount (ditto).
Does my death in fact do this? I can imagine certain specific scenarios where one or more of these things is the case, such as if my death serves to save my family, or save the world from destruction. However, death in common circumstances does not seem to fit the aforementioned criteria for me to judge it a net positive.
If you have children who in turn plan on having children, then you might be inclined to die eventually in order to secure more resources for your progeny. Similarly if you have close friends with children.
Of course, I’d prefer to incentivize having fewer children and make it harder to have children as a means to control population. If death were necessary, I’d recommend a lottery (assuming there are insufficient volunteers).
Directly? It doesn’t. It only benefits society at large, and even then only insofar as it conveys a fitness advantage—same as sexual reproduction does. The obvious difference is that sexual mechanisms are visible as motivations on your conscious level, but aging mechanisms stop at the biological level. But denying its operation will not work, just as an AI couldn’t deny the operation of its hardware (at least not without incurring comparable disadvantages).
Once again, what does it mean for something to benefit society, apart from any benefit to individuals in that society?
If something doesn’t benefit any of the people I care about directly, at all, then how can it benefit society, which is made up of those people?
Are you familiar with the concept of the selfish gene?
The problem of “old people will be close-minded and it will be harder for new ideas to gain a foothold” seems pretty inherent in abolishing death, and not just an implementation detail we can work around.
I think that the closed-mindedness of elderly people is more likely cultural than a biological fact of humanity. While the cliche runs that science progresses one death at a time, in my experience, old scientists usually have discarded and continue to discard great numbers of once-popular ideas. Science as a process gives people a mechanism for rejecting old ideas, and on the whole it’s pretty effective.
Lacking effective mechanisms for changing their mind, people in general do not need to get old in order to become closed-minded.
Really? It doesn’t seem to you like the program of studying cognitive biases, and finding ways to overcome them, can have any impact on this? What about the whole “modifying our minds” bit — enhancing our intelligence, and fixing cognitive glitches, in assorted biological and technological ways? That seems like it might have some effect, no?
I think the issue isn’t so much that old people are close-minded compared to young and middle-aged people, as that young people are very open-minded compared to middle-aged and old people.
Also, an advantage of aging and death is that the people at the top of hierarchies get changed.
Indeed. It seems like our goal should be to optimize the level of open-mindedness in everyone, appropriately for their status in society and other considerations.
Also true, however this is only an advantage insofar as otherwise, without death, the people at the top of hierarchies don’t get changed. It seems to me that our goal should be to avoid having hierarchies that work in such a fashion. Of course, that is a difficult project, but it’s not obvious to me that it’s an impossible one.
It has an impact. But it doesn’t invalidate the argument—it just moves the balance a bit.
Other paraphrases:
Might edit in my thoughts on the post later on. Probably not in depth; the CliffsNotes version would go something like:
Naturally humans developed coping mechanisms to deal with the inevitability of dying. It would be surprising to find such mitigation methods not only diminishing the negative impact of someone close to you dying, but to actually change the sign (“get out of it stronger”). Like a car bumper which not only lessens crash damage, but actually improves the car upon impact.
Indeed, death is an integral part of the natural selection cycle, and may have various societal benefits. Let’s assume that was the whole of it, hypothetically, no negative societal downsides. So now what? Now this: I don’t care. I want neither me nor my loved ones to die, and societal consequences be damned. That doesn’t mean that ceteris paribus I don’t want to benefit society (I do), but not given the price of family dying, in which case ceteris ain’t paribus, as the saying goes.
The real Achilles’ heel of anti-deathism I’d see in the fuzzy definitions of identity / continuity of one’s personal identity / continuity of consciousness in the first place. It’s not reasoning via Sorites Paradox (“if you can’t exactly delineate heaps of sand versus grains of sands, then no heaps can exist”), it’s pointing out that commonly accepted concepts integral to anti-deathism are so fuzzy and contradictory that once you start pulling out the fuzzy strands, the whole tapestry is unraveled. The teleporter problem does to common concepts of identity what Parfit’s does to layman decision theories—it fucks them.
Getting stuck on certain ideas might be just a side effect of the aging/degrading brain and not necessarily depend on the years lived with an idea. Of course, if you live for years with confirmation bias and sunk costs et al., you might have a lot of accumulated bias to counter. I think sunk cost is the most important psychological explanation for this phenomenon if biology is out, but it might not make much sense to an immortal.
I don’t see these biases as tied to biology exclusively. Basically these are plausible heuristics and optimizations of cognitive and social processes. Some mechanisms are needed. And even with much better algorithms you might still get stuck in a local optimum.
You have given reasons why death can provide utility. You have not established how these reasons comparatively outweigh the vast disutility of death, or that similar utility could not be gained without death.
I’ll give the classical rebuttal.
If you were immortal, and I mean “unaging and healing”, in a society of immortals, and reasonably adapted to immortality—would your post convince you to give death a try? I think you know the answer.
A general comment: “classic rebuttals” don’t work as well as you expect them to, because the person whose argument you are rebutting is likely already familiar with the rebuttal, yet persists in their “obvious wrongness”.
Yes, if the alternative is being overrun by short-lived but dynamic invaders who don’t give a damn about individual life. Whether this is indeed an alternative or just a nightmare scenario depends on many factors worth analyzing, calculating and modeling, not just yelling “shut up, deathist!”.
In that situation, I suspect there are very, very many alternatives that would have to be tried and discarded before death as a concept even rose to your consideration. We are biased to consider it first because we already live with it. But that’s the point of the argument—imagine death really is alien to you.
That would not be this universe, where everything so far, alive or not, has a beginning and an end.
From my point of view death is not a single solution but one end of a spectrum of solutions. I thought I had made that clear enough.
Difficult to parse. Obviously you don’t mean “not knowing about death”. You might mean “not taking death as an unavoidable, unquestioned part of life”.
Moved to Discussion (reason: downvoted)
Lemme quote some more from Orwell’s Politics and the English Language.
...
If you wanted to argue for aging separately from death, it would be fine to pick a word that dodged our emotional reactions to death. But you are arguing for death, and (by your own admission, even) substituting something else because of our emotional reaction to death.
We’re arguing over a matter of ethics. Of course emotions are going to be involved. Ultimately everything here reduces to “this policy will help us achieve ends which were chosen by things like our emotions.” This substitution seems transparently like an attempt to hack me, and I might be more charitable about this, but you have already admitted to another attempt to hack me.
From “Feeling Rational”,
If I am okay with death because I envision aging instead, that feeling is irrational. It can be destroyed by the truth if I envision what I am actually thinking about.
I doubt many anti-deathist LessWrongers think death has absolutely no good consequences. It’s just that being made tougher by people we love dying is not worth people we love dying. Being driven to do things faster and be less mediocre (if that indeed happens, which I kind of doubt, and find your folk psychology arguments for unconvincing) because we know we’re running out of time until we die is not worth running out of time and dying. Making science go faster is not worth killing literally everyone. If someone is risk-averse because they have more to lose, taking away the thing they are trying to protect is not helping them. We could much more easily and effectively protect against diseases by limiting travel over long distances, than by killing off people whose genes are too common. Even if this group selection hypothesis is true, there are evolutionary adaptive things we don’t want to have in a modern civilization. And there are things that were adaptive in the ancestral environment that aren’t adaptive anymore.
I apologize for hacking you. I had hoped it would be understood as harmless and helpful. Nonetheless I apologize for not seeing that it was invasive.
Probably.
I now see that I wrote the post in a state of rejection of the unbalanced positivism I saw in LW posts. I should have written it as a pro-and-contra piece; that would have argued for balance better than trying to push toward one, which is bound to trigger a counterforce.
I probably shouldn’t have written it at all. I have already wondered whether there should be a warning against certain topics on LW, like the no-politics policy.
They are not convincing. I shouldn’t have used them.
Agreed. Group selection benefits of aging apply beyond genes though.
What do you mean by aging?
The passage of time without dying.
Accumulating decreptitude.
Accumulating experience, knowledge, expertise, wisdom.
These are different things that are only contingently related to each other, but conflating them lets you make arguments that fall apart once one notices the equivocation on “aging”. “Aging(3) is good, but aging(2) ends in death, therefore death is good.”
Interesting point. In the context of this post it is 2 (though you shouldn’t have chosen a derogative).
I don’t think someone would name 1 when asked about the biological (or colloquial) meaning of aging. And 3 would seldom be cited alone, but probably more often as a compensating aspect of 2.
“Decreptitude” is descriptive, not derogative. Decrepit: wasted and weakened by or as if by the infirmities of old age (Merriam-Webster Online).
That’s decrepitude. Your use with “t” implied a derogative with “creep”. But it seems to have been a typo. Sorry for misinterpreting this.
I have to admit that I didn’t follow the argument as an argument—it seemed like a number of disconnected points. Some of them seemed more like sour grapes, some like salvaging some good out of a mostly-bad event. I appreciate the references.
The first third of the post should be deleted, or put in Discussion—it’s a defense of talking about the subject, and a rebuttal of other posts, not anything that stands on its own. But since it includes a threat that I’ll feel bad if I downvote before reading, I had to read it.
For your first actual point, you seem to say that adaptation has made death less bad than it might be, but completely fail to say it’s good or necessary. Still, this may expand into a good post, if presented as “psychological benefits of (other peoples’) death”.
For your second, that the group is stronger if individuals die, I think it’s a reasonable position to take, and could also make a good post. That post should focus on why death at the level of the human individual is the right kind of death. Subpersonality death, by agents updating their beliefs and terminal values, seems preferable to me, both because it’s faster and because it’s less noticeable by the participants. I think that’s what you’re trying to say in your last section, but that’s so contradictory to your previous use of “death” that I can’t tell.
If you’re really going to promote death, you should begin work toward a theory of when an individual should choose to die, in order to create a better environment for the people who are … still alive.
Thanks for this post. I basically agree with you, and it’s very nice to see this here, given how one-sided LW’s discussion on death usually is.
I agree with you that the death of individual humans is important for the societal superorganism because it keeps us from stagnating. But even if that weren’t true, I would still strongly believe in the value of accepting death, for pretty much exactly the reasons you mentioned. Like you, I also suspect that modern society’s sheltering, both of children and adults, is leading to our obsession with preventing death and our excessive risk aversion, and I think that in order to lead emotionally healthy lives, we need to accept risk of failure, pain, and even death. Based on experience and things I’ve read, I suspect we all have much deeper reserves of strength than we realize, but this strength is only called on in truly dire circumstances, because it’s so costly to use. If we never put ourselves into these dire circumstances, we will never accomplish the extraordinary. And if we’re afraid to put ourselves in such circumstances, then our personal growth will be stunted by fear.
I say this as someone who was raised by very risk-averse parents who always focused on the worst-case scenario. (For instance, when I was about seven, I was scratching a mosquito bite, and my dad said to me “you shouldn’t scratch mosquito bites, because one time your grandfather was cleaning the drain, and he scratched a mosquito bite with drain gunk on his hand, and his arm swelled up black and he had to go to the hospital”.) As a kid I was terrified of doing much of anything. It wasn’t until my late teens and early twenties that I started to learn how to accept risk, uncertainty, and even death. Learning to accept these things gave me huge emotional benefits—I felt light and free, like a weight had been lifted. Once I had accepted risk, I spent a summer traveling, and went on adventures that everyone told me were terrible ideas, but I returned from them intact and now look back on that summer as the best time in my entire life. I really like the saying that “the coward dies a thousand deaths, the brave man only one”. It fits very well with my own experience.
I’m hesitant to say that death is objectively “good” or “bad”. (I might even classify the question as meaningless.) It seems like, as technology improves, we will inevitably use it to forestall or even prevent death. Should this happen, I think it will be very important to accept the lack of death, just as now it’s important to accept death. And I’m not really opposed to all forms of anti-deathism; I occasionally hear people say things like “Life is so much fun, why wouldn’t I want to keep doing it forever?”. That doesn’t seem especially problematic to me, because it’s not driven by fear. What I object to is this idea that death is the worst thing ever, and obviously anyone who is rational would put a lot of money and effort into preventing it, so anyone who doesn’t is just failing to follow their beliefs and desires to the logical conclusion. So it’s really nice to see this post here, amidst the usual anti-deathism. Thanks again for writing it.
Ok, first of all, I do have to ask: are you a non-native English speaker? The mechanics here could use a little work iff you’re native, but if you’re non-native, they’re quite good and the flaws are easily forgivable.
Now, main point: personally, I see no point going on Grand Religious Crusades about death—for or against. If someone really, truly does want to die (and there are plenty of days I think, “how could I ever put up with this shit we call life forever!?” myself), we should let them. If someone really, truly does not want to die, and isn’t doing any damage to anything we consider ethically valuable by not-dying (ie: other people, ecological sustainability, the things I’m forgetting right now but will think of later), then we should be developing ways to enable that preference.
There’s really no need to enforce a single choice for everyone. There’s no need for huge flamewars about it, or at least, not massive flamewars here, where it is automatically presumed that dying is a value decision people should have the opportunity to make for themselves.
I’m non-native. German, actually. I know of some flaws I make, esp. if I’m in haste. Comma placement being one.
Like I said, for a nonnative speaker, you’re quite good, so the message got across.
Now I’m curious. I’d like to improve. What ‘flaws’ stand out?
It looks pretty good to me.
One minor thing: “A” is normally the indefinite article used before utilitarian, as in “A utilitarian should...” rather than “An utilitarian should....”
The “u” in utilitarian is pronounced like “you” not “oo.”
Yes. Such happens when you speak only 1/100th of what you read.
So the harder we are to kill, the harder we can try. This example argues in the opposite direction from what you present it as.
If you ain’t cheatin, you ain’t tryin.
I think the intent there was that we should be pushing for space exploration quickly, and we can accept a relatively high risk of death in order to get the benefits sooner. I doubt the person who said that would want astronauts to kill themselves if nobody’s died for space exploration in the past decade, or to engage in increasingly risky behaviors without commensurate rewards.
To those who think that death should be a choice: what about the benefits of knowing that we are mortal, which death by choice doesn’t allow for? E.g. as a counterforce to arrogance, as a force to act now, and, as we age, to start reevaluating our priorities. In other words, the benefits while we live of knowing that we are mortal may outweigh the benefit of immortality. I suspect these concerns have been dealt with on this site, so if they have, feel free to link me to an appropriate post instead of writing a new response.
Some commentary on the matter is here: How to Seem (and Be) Deep.
Thank you, but that post doesn’t seem to answer my question, since it doesn’t address how death interplays with our cognitive biases. I agree that if we were perfectly rational beings immortality would be great; however, I don’t see how that implies that, given our current state, the choice to live forever (or a really long time) would be in our best interest.
Similarly I don’t see how that argument indicates that we should develop longevity technologies until we solve the problem of human irrationality and evil. For example, would having a technology to live 150 years cause more benefit or would it cause wars over who gets to use the technology?
You keep using the words “we” and “our”, but “we” don’t have lifespans; individual humans do. So the relevant questions, it seems to me, are: is removing the current cap on lifespan in the interest of any given individual? And: is removing the current cap on lifespan, for all individuals who wish it removed, in the interests of other individuals in their (family, country, society, culture, world)?
Those are different questions. Likewise, the choice to make immortality available to anyone who wants it, and the choice to actually continue living, are two different choices. (Actually, the latter is an infinite sequence[1] of choices.)
No one is necessarily claiming that we should. Like I say in my top-level comment, this is a perfectly valid question, one which we would do well to consider in the process of solving the engineering challenge that is human lifespan.
[1] Maybe. Someone with a better-exercised grasp of calculus correct me if I’m wrong — if I’m potentially making the choice continuously at all times, can it still be represented as an infinite sequence?
“You keep using the words ‘we’ and ‘our’, but ‘we’ don’t have lifespans; individual humans do.” Of course, but “we” is common shorthand for decisions which are made at the level of society, even though that is a collection of individual decisions (e.g. should we build a bridge, or should we legalize marijuana). Do you think that using standard English expressions is problematic? (I agree that both the question of benefit for the self and benefit for others is important, and think the issue of cognitive biases is relevant to both of them.)
I just looked at your comment, and I agree with that argument, but that hasn’t been my impression of the view of many on this site (and clearly isn’t the view of researchers like De Grey), however I am relatively new here and may be mistaken about that. Thank you for clarifying.
I don’t think anyone’s willing to fight a war just to prevent another country’s life expectancy from increasing.
Maybe, but on the other hand there is inequity aversion: http://en.wikipedia.org/wiki/Inequity_aversion
Also there is the possibility of fighting over the resources to use that technology (either within society or without). Do you disagree with the general idea that without greater rationality extreme longevity will not necessarily be beneficial or do you only disagree with the example?
That sounds more like something that would motivate the side that’s not already long-lived. They’d already have plenty of motivation. I’m saying the country that has access to the tech but wants to restrict it isn’t going to have the will to fight.
Well, “not necessarily be beneficial” strictly means “is not certain to be beneficial”, but connotationally means “is likely enough to prove not-beneficial that we shouldn’t do it”, so I ADBOC—it’s conceivable that it could go wrong, but I think it’s likely enough to have a beneficial enough outcome that we should do it anyway.
Yes, and that was the meaning of my initial comment, and that is a concern in today’s world, where we have limited resources, so that not everyone would be able to make use of such a technology. The country that has it (or the subset of people that have it within one country) will be motivated to defend the resources necessary to use it. This isn’t an argument against such research in a world without any scarcity, but that isn’t our world.
I am still not sure whether it is likely to be more beneficial or not for heavily emotional and biased humans like us.
Does that even work? I’m thinking that an arrogant person will generally shrug off the mortality thing and go on with being arrogant, barring some near-death experience.
Or at least “this decade” rather than “some day”. But death seems like a steep cost for this benefit. Is there another way to get it? Like, if we’ve got immortal people anyway, we’re going to want to have a retirement equivalent, but it won’t be a matter of working forty years and taking the rest of your life off. What if we had a system whereby people took ten years off work after every thirty or so, with a guaranteed salary during that time that’s more than sufficient for living? Then you would have a specific timeframe in which you are expected to relax, take long vacations, knock off a life goal or two, that sort of thing.
That requires reworking social security / state pensions and probably requires a lot more wealth in general to enact. But we don’t currently have a cure for death, so there’s time to work out how to deal with a lack of death and enact those policies.
We are all arrogant to some degree or another; knowledge of our mortality helps keep it in check. What would the world look like with an unrestrained god complex?
Taking 10 years off after 30 doesn’t seem to solve the psychological issue. In today’s world, as we get older we start noticing the weakness of our bodies, which pushes us to act, since “if not now, when?”
Unless we solve the various cognitive biases we suffer from, extreme longevity seems like a mixed blessing at best, and it seems to me that it would cause more problems than it solves.
I agree that these arguments don’t decide the issue, but the counterargument of letting people choose doesn’t seem effective to me. Also, arguments about how we would be superbeings who are totally rational may be applicable to some post-human existence, but would not help the argument that longevity research should be pursued today (since, e.g., there would likely be wars over who gets to use it, which might kill even more people; as we see in the world today, the problem with world hunger and disease is not primarily one of lack of technological or economic ability but rather one of sociopolitical institutions).
Do we have any evidence regarding this? I know there are parables serving to emphasize humility due to mortality, but I have no information on their effectiveness. It seems like it needs some immediacy to be effective, which means it only takes place when you start feeling old—I’m guessing this will be forties to sixties for most Westerners.
A well-funded, extended retirement is a perfect opportunity to do all the things you haven’t had time to do while working. The threat of having to work for another few decades should be a reasonable proxy for the fear of death.
Specifically, people don’t tell themselves they’ll put things off for thirty years until the next retirement phase; they tell themselves they’ll do it eventually. Thirty years is subjectively a very long time, and people won’t be inclined to happily delay for that long.
are not included in anything I said here. My suggestion would require large societal changes and provides no mechanism to enact them, but it accounts for normal people, not rational agents.
I would have to look around to see if there is non-anecdotal evidence, but anecdotally ~40 is when I have heard people start mentioning it.
I don’t think your proposal would work, since I don’t think the time factor is the biggest issue. How often do people make big plans for summer vacation and not actually do them? They probably wouldn’t say “I’ll put it off for thirty years”, but rather repeatedly say “I’ll put it off till tomorrow”.
And then they get a reminder that they only have a year left before they go back to work. And then they get a reminder that they only have six months left. Then three months. At that point, the time crunch is palpable. They have a concrete deadline, not a nebulous one.
And if they miss it? Well, they’ve learned for next time. That’s an option unavailable to a dead person.
That doesn’t strike me as how psychology works, since in the real world people often repeatedly make the same mistakes. It also seems that even if your proposal would work, it doesn’t address the original issue, since you are assuming that the person has a clear idea of his goals and only needs time to pursue them, whereas I think the bigger issue which aging encourages is reorienting one’s values.
I appreciate your taking the time to address my question, but it seems to me that this conversation isn’t really making progress, so I will probably not respond to future comments on this thread. Thank you.
I find the formatting of this article distracting. What is it with this one-paragraph-per-sentence approach?
More importantly, I find this article rather repulsive in the way it argues; “if we’re not dying, we’re not trying hard enough”? “being wary of death leads to mediocre living”? “growing old is useful”? This all sounds to me like weak, half-hearted rationalizations.
Neither of them seem to be solid arguments against the very simple statement: “every human should be able to live for as long as they want”. If they want to risk their lives for another purpose, that is their prerogative, but death as an acceptable evil that needs to be embraced is an absurd notion. Death is bad. Sometimes it’s less bad than other stuff, but it’s pretty damned bad, as bad things go. How is this not obvious?
This is not an argument but a phrasing of the consequences I cited.
This is no argument either but was explicitly relayed as a personal opinion.
This is objectively proven and thus no rationalization.
The latter is. It just doesn’t cry out: “you selfish individual, how can you rob your tribe of the resources it needs to fight entropy as long as it can”.
It is non-obvious. You just fell into the many traps reflection on death poses that I explicitly mentioned.
They all sounded to me like cases of “Knowing About Biases Can Hurt People”. They all seemed completely irrelevant to the actual badness of death.
We’ll find a way around entropy too. We definitely have the time to try, once we start living forever. More importantly, the value of offsetting entropy versus demanding that individuals cease to exist over and over again is something that’s for the tribe to figure out together, but from where I stand the solution also sounds pretty obvious, Kyuubei.
Maybe it is quite the other way around. That we could rather find a way around entropy (not that I’d believe that) if we didn’t live forever.
How does that even begin to make sense?
Finding solutions is a search process. A computation. It costs entropy (even with a high degree of reversible computation). Entropy is used less efficiently by long-lived beings than by beings that reproduce and die. Thus our search would require more energy if we were immortal.
Maybe we have enough energy to spend. But maybe we don’t.
On the other hand, maybe we should differentiate between immortal knowledge and immortal identity. I’d rather agree with some kind of the former (with caveats for reversible computing) than with the latter.
You seem to be overlooking something. The entropic consumption of sapient beings is negligible compared to that of stars, and so is the difference in entropic consumption between immortality and torch-passing.
You seem to be overlooking something. It may very well be that
a) living in space is inherently more difficult (read: energy-inefficient) than on Earth
b) we are already using up Earth’s resources much faster than they renew
c) it is not in the least guaranteed that we can find energy sources that are significantly more efficient, or open up new sources beyond those that we already have
d) in particular it may be quite impossible to use significant fractions of the energy of the stars
Just in case this comes across as technological pessimism: it isn’t. I’m very much for technological progress. We will need all the progress we can get to optimize efficiency, because if we don’t start soon we may have lost more than we can regain later. It may not be true, but somebody has to go that way too. When so8res, knowing that he may be wrong, goes into the compartment of (U)FAI, then I, on the other hand, go into efficiency, also knowing that I may be wrong.
“Indefinite preservation of identity” is a less loaded term than immortality (applause light!) and probably should be used instead when implied in a given context.
Most of your post is not arguments against curing death.
People being risk-averse has nothing to do with anti-aging research and everything to do with individuals not wanting to die...which has always been true (and becomes more true as life expectancy rises and the “average life” becomes more valuable). The same is true for “we should risk more lives for science”.
I agree that people adapt OK to death, but I think you’re poking a strawman; the reason death is bad is because it kills you, not because it makes your friends sad.
I think “death increases diversity” is a good argument. On the other hand, most people who present that argument are thrilled that life expectancy has increased to ~70 from ~30 in ancient history. Why stop at 70?
note: “life expectancy used to be ~30” is a common misconception (it’s being skewed by infant mortality) (life expectancy has gone up a lot, just not that much)
(as far as i know. i’ve been told that it’s a common misconception that this is a common misconception, but they refused to cite sources)
It isn’t. I’m all for curing death. And postponing senescence.
But not without considering the trade-offs.
While I agree with the spirit of this sentiment, I think we should be a bit careful with blanket statements; the fact that my death would make my friends and family sad is definitely an aspect of what makes it bad. My death would still be bad without that aspect, but not quite as bad.
This is loading connotation, just differently. Better is to taboo death or temporarily define it to cover that and only that which is in dispute. This redefinition implicitly marks some of the factors salient to ‘anti-deathists’* as irrelevant. E.g. from an information-theoretic/knowability of local revival perspective, there is a nontrivial probability that there is a huge difference between medical death and senescence; if aging is theoretically reversible (perhaps even for some neurodegenerative conditions) up to x minutes after medical death for and only for x<100, then the difference between x=-10^6 and x=10^6 is huge and qualitative for practical decision-making purposes.
*I dislike ‘anti-deathist’. Maybe I could call myself ‘pro-life’—that’s never been done before, right...?
Consider rot13ing HPMoR spoilers and indicating that the rot13 contains a major spoiler, or just referring to HPMoR generally without pointing to specific parts, or something.
What is your response to the following proposition: that Eliezer is an example of this, and that he is also your canonical anti-deathist, and you must reconcile these observations?
This post talks past at least one significant anti-deathist line, which is that there is a significant probability that one will inductively desire high lifespan. Now, the basic argument is only a first pass and there are legitimate criticisms that require it to be made more rigorous, but it does go through, and this post doesn’t seem to speak much to it. I guess the evolutionary point sort of touches it, but I’m not exactly clear on where exactly it’s coming up against the deathist position so without further clarification it seems sort of non-sequitur/I put really low weight on it.
I’m not clear where you would claim the optimum/optima is/are, but you should be very suspicious if your arguments prop up the status quo on lifespan, given alternative possible extremes. If your arguments would conclude that in a world where everyone died before 25 or a world where everyone lived to 10^12, humans should reject increasing or decreasing their lifespan to our typical ~100 life span, then you are being hit by status quo bias. I pretty much think that that point constitutes nadegiri unto deathism.
Going too abstract, like in SaidAchmiz’s comment, risks detaching from the topic. But yes, I could have tried to avoid calling it death. Could, maybe should, have bridged the inferential gap to a balanced treatment somehow. But as I said: I took the bait when asked to write a post on my position.
I thought I provided enough links and references to point out the trade-off made by senescence. Reversing aging and/or postponing death costs entropy. Entropy that could be applied more usefully, altruistically. This opens a whole side topic (complexity limits of life and technology) which I didn’t want to open.
My response seems to be that my whole post has been read wrongly. It is a defense of deathism, not an attack on the opposite. I have seen too many comments and posts here treating death as something unquestionably bad. I don’t see it as an unquestionable good either. Reverse stupidity is not intelligence. I tried to add some facts to this topic. But obviously I failed. Not that I didn’t know the risk.
That is not the same. And it invites politics.
The optimum depends on the trade-offs made by aging versus the group benefits. I have not seen actual calculations in the material I surveyed, so I cannot make an estimate. Presumably technology can tilt the balance; otherwise we probably wouldn’t already see a change in average life-span. Hypothetical future technology will likely tilt this much further (and the singularity being a pole, I wouldn’t rule out infinite life-span in general). But as the basic premise—group benefits—will hold as long as there is a population of individuals, some aging seems at least likely.
Done. Sorry.
I assumed that the links/references were to studies or other evidence of points made in the post rather than making new points, and since I didn’t find the points in the post convincing, the value of information seemed low; I didn’t want to open a bunch of new tabs or risk my browser choking on PDFs, and didn’t feel up to reading more articles, so I didn’t look at them. To some extent this was probably true for other anti-deathists who read the post.
I got that. My point is: if you think the anti-deathist mindset leads to cultures not risking lives when they should (e.g. for high-value information), then how do you reconcile that with the observation of anti-deathists working on ventures like x-risk, FAI, Effective Altruism, etc. over extending their own lives?
My honest opinion (and at the risk of this comment coming across as even more misleadingly nasty than it is actually intended) is that ‘just trying to add facts’ does not mean the reaction in the comments to this post was in error, because I think that regardless of what you were aiming to do, you revealed misunderstanding of the anti-deathist position and your thinking came across as confused and possibly even non-sequitur.
Yep; I was kidding, and aware that that would be an obnoxious/inflammatory title to apply to oneself.
This makes me way more confused as to your position—that’s not a criticism of you giving more information, just me relaying state—and this is starting to feel like one of those times where actual policies or specific opinions should be clarified and policy disagreements identified. Can you quote actual things LW’ers (preferably Eliezer) have said over which you have specific disagreements?
Ah, don’t worry about me; I’d already read HPMoR! :)
Has anyone considered that life may not be that great? Considering that:
Death is by default neutral as you get none of the downsides of life.
One has to cope every day with the fact of being alive (what they call existential problems). Unless you (1) find distractions, or (2) find a compelling project to work on (and these are temporary solutions subject to diminishing returns), you’ll experience sheer boredom, ennui, anxiety, self-doubt, etc. I suspect this point applies to everyone, even someone otherwise really fortunate in a best-case scenario who has won the lottery.
High potential for disappointment and misfortune.
Half of life is spent working, which may be tiring or stressful.
Doing chores.
Old age increases problems.
Possible depression or other mental disorders.
Possible poverty or starvation.
Able to experience pain.
(Bonus) By eliminating life we also eliminate the possibility of bad AIs instituting worse-than-death scenarios.
Did you know? When you can be hurt at any time, you feel pain more, not less. Similarly, when you can die at any time, you value life less, not more. (Well, I do.)
On Thoughts on Death I refused to defend deathism because
[missing from import]
“that is no consolation for someone who has lost a relative to death”, “my position on death is controversial here”, and “the only relatives I lost until now were grandparents”. But
[missing from import]
doesn’t apply in this independent post and shouldn’t hold me back in general. I risk losing karma, but I’ll try to defend my view as objectively and scholarly as I can (given one day of preparation).
[missing from import]
was countered in the comment by Mestroyer so I wrote this argument that death in general is adaptive.
short response is “yeah, sure, sorta … but only if you’re a stupid group. we can do better.”
edit: http://lesswrong.com/lw/jop/a_defense_of_senexism_deathism/akk3 is the longer version of this response
Just claiming that a smart group might do better is not enough.
About the Spirit of Deathism
I understand that defending deathism is problematic on this forum.
I assume that this is because it is seen as inconsequential, pessimistic and against the spirit of the forum.
I feel that the latter is related to EY’s strong opinion on this matter.
I was moved by his account about the death of his brother.
Do I understand as well as he how bad death is?
Do I need to lose a loved one to understand that?
I lost only grandparents until now. I did lose my wife—but not to death, so this presumably doesn’t count.
I will lose my father in law to cancer some time in the future.
The loss of that amazing trove of experience—much like that of a 1000-year-old vampire—will hurt. But he is passing on his legacy to the next generation(s), and he jokes that he is already 20 years older than he expected.
So do I understand death? Am I qualified to question death and by doing so question the resolutions formed when dealing with death?
I can hear EY’s experience echo from HPMOR—obgu va gur svtug rtnvafg gur qrzragbef naq gur erfbyhgvba nsgre urezvbarf qrngu (rot13; links here and here). It moved me to tears. Why doesn’t it move me to fight death? Or does it?